feat: update base/openai chat model module #70
base: main
Conversation
Summary of Changes

Hello @jc200808, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces several enhancements and fixes to the base and OpenAI chat model modules. It focuses on improving the robustness of tool choice validation, refining audio data formatting for Qwen-omni models to ensure correct base64 prefixing and immutability, and enhancing the handling of structured output requests for Qwen models by emitting warnings and defaulting to a compatible JSON object format. These updates aim to improve the reliability and compatibility of the chat model integrations.
Code Review
This pull request introduces several improvements to the chat model modules, such as using a set for TOOL_CHOICE_MODES for better performance, enhancing tool_choice validation with more robust checks and clearer error messages, and correctly handling structured output for Qwen models. The refactoring of _format_audio_data_for_qwen_omni to avoid side effects is a good direction, but the implementation is flawed as it uses a shallow copy and has suboptimal exception handling. More critically, there's a logic inversion in the achat method for tool_choice validation that will lead to runtime errors. I've provided suggestions to address these issues.
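The set-based mode lookup, a correctly oriented (non-inverted) membership check, and side-effect-free audio formatting could be sketched as below. All names here are illustrative, not the module's real API:

```python
import copy

# A set gives O(1) membership checks, which is the motivation the review cites.
TOOL_CHOICE_MODES = {"auto", "none", "required"}


def validate_tool_choice(tool_choice, tool_names):
    """Reject tool_choice values that are neither a mode nor a known tool name.

    Note the check is `not in` on both collections; inverting either condition
    (the bug the review flags in achat) would reject every valid value.
    """
    if tool_choice is None:
        return
    if isinstance(tool_choice, str):
        if tool_choice not in TOOL_CHOICE_MODES and tool_choice not in tool_names:
            raise ValueError(
                f"tool_choice {tool_choice!r} is neither one of "
                f"{sorted(TOOL_CHOICE_MODES)} nor a provided tool name"
            )
    elif not isinstance(tool_choice, dict):
        raise TypeError(
            f"tool_choice must be a str or dict, got {type(tool_choice).__name__}"
        )


def add_audio_prefix(messages, fmt="wav"):
    """Return a prefixed copy of messages, leaving the caller's data untouched.

    copy.deepcopy is used because a shallow copy (list(messages)) still shares
    the nested content dicts, so mutating them would leak back to the caller.
    """
    formatted = copy.deepcopy(messages)
    for msg in formatted:
        for part in msg.get("content", []):
            if not isinstance(part, dict):
                continue
            audio = part.get("audio")
            if isinstance(audio, str) and not audio.startswith("data:"):
                part["audio"] = f"data:audio/{fmt};base64,{audio}"
    return formatted
```

The deep copy addresses the shallow-copy flaw the review mentions: only the returned structure carries the base64 prefix, while the input messages remain exactly as the caller built them.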
OpenJudge Version
[The version of OpenJudge you are working on, e.g. `import openjudge; print(openjudge.__version__)`]

Description
[Please describe the background, purpose, changes made, and how to test this PR]
Checklist
Please check the following items before code is ready to be reviewed.
Code has been checked with the `pre-commit run --all-files` command.